The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
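For context, patch-based training, the most commonly reported workaround for oversized samples, amounts to cropping fixed-size regions from each volume. The following sketch uses hypothetical shapes and names, not code from any surveyed solution:

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size 3D patches from a volume too large to fit in memory."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        corner = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        slices = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[slices])
    return np.stack(patches)

# Example: a 512^3 volume is reduced to a batch of 64^3 training patches.
volume = np.zeros((512, 512, 512), dtype=np.float32)
batch = sample_patches(volume)   # shape (8, 64, 64, 64)
```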
Federated learning (FL) is an effective technique for directly involving edge devices in machine learning training while preserving client privacy. However, the substantial communication overhead of FL makes training challenging when edge devices have limited network bandwidth. Existing work on optimizing FL bandwidth overlooks downstream transmission and does not account for FL client sampling. In this paper, we propose GlueFL, a framework that incorporates new client sampling and model compression algorithms to mitigate the low download bandwidths of FL clients. GlueFL prioritizes recently used clients and bounds the number of changed positions in compression masks in each round. Across three popular FL datasets and three state-of-the-art strategies, GlueFL reduces downstream client bandwidth by 27% on average and training time by 29% on average.
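A rough sketch of the two mechanisms the abstract names, sticky client sampling and bounded mask changes, under invented function names and simplified logic (GlueFL's actual algorithms include bandwidth-aware details not modeled here):

```python
import random

def sample_clients(all_clients, sticky_group, round_size, sticky_share=0.7):
    """Pick mostly recently-used clients (who already cache the current model),
    plus a few fresh ones; a simplified view of prioritizing recent clients."""
    n_sticky = min(int(round_size * sticky_share), len(sticky_group))
    chosen = random.sample(sticky_group, n_sticky)
    fresh_pool = [c for c in all_clients if c not in chosen]
    return chosen + random.sample(fresh_pool, round_size - n_sticky)

def update_mask(prev_mask, grad_magnitudes, k, max_changes):
    """Keep k coordinates by gradient magnitude, but flip at most `max_changes`
    positions relative to last round's mask, so returning clients only
    download a small mask delta."""
    order = sorted(range(len(grad_magnitudes)), key=lambda i: -grad_magnitudes[i])
    wanted = set(order[:k])
    added = list(wanted - prev_mask)[:max_changes]        # bounded new positions
    kept = [i for i in order if i in prev_mask][: k - len(added)]
    return set(kept) | set(added)
```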
Partial MaxSAT (PMS) and Weighted PMS (WPMS) are two practical generalizations of the MaxSAT problem. In this paper, we propose a local search algorithm for these problems, called BandHS, which applies two multi-armed bandits to guide the search directions when escaping local optima. One bandit is combined with all the soft clauses to help the algorithm select appropriate soft clauses to satisfy, and the other with all the literals in hard clauses to help the algorithm select appropriate literals for satisfying the hard clauses. These two bandits improve the algorithm's search ability in both feasible and infeasible solution spaces. We further propose an initialization method for (W)PMS that prioritizes both unit and binary clauses when producing the initial solutions. Extensive experiments demonstrate the excellent performance and generalization capability of our proposed methods, which greatly boost both the state-of-the-art local search algorithm, SATLike3.0, and the state-of-the-art SAT-based incomplete solver, NuWLS-c.
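As an illustration of the bandit component, a classic UCB1 selector over candidate clauses or literals might look as follows; this is a generic sketch, not BandHS's exact reward or selection scheme.

```python
import math

class UCBBandit:
    """One arm per candidate (soft clause or hard-clause literal); classic UCB1."""
    def __init__(self, n_arms, c=1.4):
        self.counts = [0] * n_arms     # times each arm was chosen
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.c = c
        self.t = 0

    def select(self, candidates):
        self.t += 1
        def ucb(i):
            if self.counts[i] == 0:
                return float("inf")    # try unseen arms first
            bonus = self.c * math.sqrt(math.log(self.t) / self.counts[i])
            return self.values[i] + bonus
        return max(candidates, key=ucb)

    def update(self, arm, reward):
        """Reward could be, e.g., the improvement in total satisfied weight."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```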
Learning an explainable classifier often results in a low-accuracy model or a huge rule set, while learning a deep model is usually better at handling noisy data at scale, at the cost of results that are hard to explain and weak generalization. To bridge this gap, we propose an end-to-end deep explainable learning approach that combines the noise-handling advantage of deep models with the interpretability of expert rules. Specifically, we propose to learn a deep data-assessing model that represents the data as a graph capturing the correlations among different observations, whose output is used to extract key data features. The key features are then fed into a rule network constructed from predefined noisy expert rules with trainable parameters. As these models are correlated, we propose an end-to-end training framework that uses the rule classification loss to optimize the rule learning model and the data-assessing model at the same time. As the rule-based computation is non-differentiable, we propose a gradient linking search module to carry gradient information from the rule learning model to the data-assessing model. The proposed method is tested in an industry production system; compared with a strong deep ensemble baseline, it shows comparable prediction accuracy, much higher generalization stability, and better interpretability, and it shows much better fitting power than a pure rule-based approach.
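The gradient linking search module itself is not specified in the abstract; a common way to carry gradients across a non-differentiable rule evaluation is a straight-through estimator, sketched here in PyTorch purely to illustrate the kind of linkage described (not the paper's actual module):

```python
import torch

class StraightThroughRule(torch.autograd.Function):
    """Forward: a hard, non-differentiable rule decision (threshold test).
    Backward: pass the gradient through unchanged, so the upstream
    data-assessing model still receives a training signal."""
    @staticmethod
    def forward(ctx, score, threshold):
        return (score > threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None   # identity w.r.t. score, no grad for threshold

features = torch.randn(4, requires_grad=True)
fired = StraightThroughRule.apply(features, 0.0)
fired.sum().backward()             # gradients reach `features` despite the hard rule
```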
Although numerous algorithms have been proposed to solve the categorical data clustering problem, how to assess the statistical significance of a set of categorical clusters remains unaddressed. To fill this void, we employ the likelihood ratio test to derive a test statistic that can serve as a significance-based objective function in categorical data clustering. Consequently, a new clustering algorithm is proposed in which the significance-based objective function is optimized via a Monte Carlo search procedure. As a by-product, we can further calculate an empirical $p$-value to assess the statistical significance of a set of clusters and develop an improved gap statistic for estimating the cluster number. Extensive experimental studies suggest that our method achieves performance comparable to state-of-the-art categorical data clustering algorithms. Moreover, the effectiveness of such a significance-based formulation for statistical cluster validation and cluster number estimation is demonstrated through comprehensive empirical results.
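To make the objective concrete, a likelihood ratio statistic for a single categorical attribute can be computed as below; this minimal sketch compares per-cluster multinomial likelihoods against a single global distribution and only illustrates the statistic's shape, not the paper's full multi-attribute objective.

```python
import numpy as np

def log_likelihood(counts):
    """Multinomial log-likelihood of categorical counts under MLE probabilities."""
    n = counts.sum()
    nz = counts[counts > 0]
    return (nz * np.log(nz / n)).sum()

def lr_statistic(column, labels):
    """2 * (log L under per-cluster distributions - log L under one global one)."""
    cats = np.unique(column)
    ll_null = log_likelihood(np.array([(column == c).sum() for c in cats]))
    ll_alt = 0.0
    for k in np.unique(labels):
        sub = column[labels == k]
        ll_alt += log_likelihood(np.array([(sub == c).sum() for c in cats]))
    return 2.0 * (ll_alt - ll_null)   # large values suggest "significant" clusters

col = np.array(list("aabbabbbaa"))
labels = np.array([0, 0, 1, 1, 0, 1, 1, 1, 0, 0])
print(lr_statistic(col, labels))      # perfectly separated clusters -> large statistic
```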
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
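For readers unfamiliar with the setup, producing a fully INT8 TensorFlow Lite model via post-training quantization looks roughly like this; the toy model and random calibration data are placeholders, not a submitted solution.

```python
import tensorflow as tf

# Toy 3x super-resolution network: conv features, then pixel shuffle.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(3 * 9, 3, padding="same"),   # channels for 3x upscaling
    tf.keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, 3)),
])

def representative_data():
    # In the challenge, calibration crops would come from DIV2K.
    for _ in range(100):
        yield [tf.random.uniform((1, 64, 64, 3))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()    # fully quantized INT8 graph for the NPU
```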
This paper considers the problem of lossy neural image compression (NIC). Current state-of-the-art (SOTA) methods adopt uniform posteriors to approximate the quantization noise and single-sample estimators to approximate the gradient of the evidence lower bound (ELBO). In this paper, we propose to train NIC with the multiple-sample importance weighted autoencoder (IWAE) objective, which is tighter than the ELBO and converges to the log-likelihood as the sample size increases. First, we identify that the uniform posterior of NIC has special properties that affect the variance and bias of the pathwise and score-function estimators of the IWAE objective. Moreover, from the perspective of gradient variance, we provide insights into tricks commonly adopted in NIC. Based on these analyses, we further propose multiple-sample NIC (MS-NIC), an IWAE target for NIC. Experimental results demonstrate that it improves SOTA NIC methods. Our MS-NIC is plug-and-play and can easily be extended to other neural compression tasks.
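A minimal sketch of the multiple-sample objective with NIC's uniform posterior, under assumed function names and sign conventions: since the U(-0.5, 0.5) noise has unit density, log q(z|x) = 0 and the importance weights reduce to the joint log-likelihood terms.

```python
import math
import torch

def iwae_bound(joint_log_prob, latent, k=5):
    """IWAE-style multiple-sample bound with the uniform quantization-noise
    posterior. `joint_log_prob(z)` is assumed to return per-example
    log p(x, z) (i.e., negative rate plus distortion terms); the name and sign
    convention are illustrative assumptions, not the paper's exact formulation."""
    log_w = torch.stack(
        [joint_log_prob(latent + torch.rand_like(latent) - 0.5) for _ in range(k)],
        dim=0,                                    # (k, batch)
    )
    # log (1/k) sum_k exp(log w_k): tighter than the single-sample ELBO (k = 1).
    return torch.logsumexp(log_w, dim=0) - math.log(k)
```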
Non-line-of-sight (NLOS) imaging is an emerging technique for detecting objects behind obstacles or around corners. Recent studies on passive NLOS imaging have mainly focused on steady-state measurement and reconstruction methods, which show limitations in recognizing moving targets. To the best of our knowledge, we propose a novel event-based passive NLOS imaging method. We acquire asynchronous event-based data that contains detailed dynamic information about the NLOS target and effectively mitigates the speckle degradation caused by motion. In addition, we create the first event-based NLOS imaging dataset, NLOS-ES, and extract event-based features via a time-surface representation. We compare reconstructions from event-based data against those from frame-based data. The event-based method performs well in terms of PSNR and LPIPS, outperforming the frame-based method by 20% and 10%, respectively, while its data volume is only 2% of that of the traditional method.
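The time-surface representation mentioned above is a standard event-camera feature; a minimal NumPy construction (exponentially decaying the time since the last event at each pixel) might look like this.

```python
import numpy as np

def time_surface(events, shape, t_ref, tau=0.05):
    """Exponentially decayed time surface from an event stream.
    `events` is an (N, 3) array of (x, y, t) rows; a standard construction,
    shown only to illustrate the feature the abstract mentions."""
    last_t = np.full(shape, -np.inf)
    for x, y, t in events:
        if t <= t_ref:
            last_t[int(y), int(x)] = max(last_t[int(y), int(x)], t)
    surface = np.exp((last_t - t_ref) / tau)   # 1 at fresh pixels, -> 0 for stale
    surface[np.isinf(last_t)] = 0.0            # pixels that never fired
    return surface
```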
In this work, we tackle the challenging task of jointly tracking hand-object poses and reconstructing their shapes from depth point cloud sequences in the wild, and propose HandTrackNet to estimate inter-frame hand motion. HandTrackNet features a novel hand pose canonicalization module that eases the tracking task, yielding accurate and robust hand joint tracking. Our pipeline then reconstructs the full hand by converting the predicted hand joints into the template-based parametric hand model MANO. For object tracking, we design a simple yet effective module that estimates the object SDF from the first frame and performs optimization-based tracking. Finally, a joint optimization step performs joint hand-object reasoning, which alleviates occlusion-induced ambiguity and further refines the hand pose. During training, the whole pipeline sees only purely synthetic data, which are synthesized with sufficient variation and depth simulation to ease generalization. The whole pipeline is thus robust to the sim-to-real generalization gap and can be directly transferred to real in-the-wild data. We evaluate our method on two real hand-object interaction datasets, HO3D and DexYCB, without any finetuning. Our experiments show that the proposed method significantly outperforms previous depth-based hand and object pose estimation and tracking methods, running at a frame rate of 9 FPS.
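The learned canonicalization module is not detailed in the abstract; as rough intuition, a rigid canonicalization of a hand point cloud using the previous frame's joints could be sketched as follows (illustrative only; the paper's module is learned, not this PCA alignment).

```python
import numpy as np

def canonicalize(points, prev_joints):
    """Normalize a hand point cloud into a canonical frame using the previous
    frame's joint prediction: center on the joints, then align their principal
    axes with the coordinate axes via SVD."""
    center = prev_joints.mean(axis=0)
    centered = points - center
    _, _, vt = np.linalg.svd(prev_joints - center)   # rows of vt: principal axes
    return centered @ vt.T, center, vt               # canonical points + transform
```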
Existing detection methods commonly use a parameterized bounding box (BBox) to model and detect (horizontal) objects, with an additional rotation-angle parameter for rotated objects. We argue that this mechanism has fundamental limitations in building an effective regression loss for rotation detection, especially for high-precision detection at high IoU thresholds (e.g., 0.75). Instead, we propose to model rotated objects as Gaussian distributions. A direct advantage is that our new regression loss on the distance between two Gaussians, e.g., the Kullback-Leibler divergence (KLD), can align well with the actual detection performance metric, which is not well addressed in existing methods. Moreover, the two bottlenecks, i.e., boundary discontinuity and the square-like problem, also disappear. We also propose an efficient Gaussian-metric-based label assignment strategy to further boost performance. Interestingly, by analyzing the gradient of the BBox parameters under the Gaussian-based KLD loss, we show that these parameters are dynamically updated with interpretable physical meaning, which helps explain the effectiveness of our approach, especially for high-precision detection. We extend our approach from 2-D to 3-D with a tailored algorithm design to handle heading estimation, and experimental results on twelve public datasets (2-D/3-D, aerial/text/face images) with various base detectors demonstrate its superiority.
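The core modeling step, converting a rotated box to a Gaussian and measuring the KLD between prediction and ground truth, can be written directly from the abstract's description; the sketch below uses the standard conversion Sigma = R diag(w^2/4, h^2/4) R^T and the textbook 2-D Gaussian KL formula.

```python
import numpy as np

def bbox_to_gaussian(cx, cy, w, h, theta):
    """Rotated box (cx, cy, w, h, theta) -> Gaussian N(mu, Sigma)."""
    mu = np.array([cx, cy])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    sigma = R @ np.diag([w**2 / 4.0, h**2 / 4.0]) @ R.T
    return mu, sigma

def kld(mu_p, sig_p, mu_q, sig_q):
    """KL(N_p || N_q) for 2-D Gaussians, used as the box regression loss."""
    inv_q = np.linalg.inv(sig_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(inv_q @ sig_p)
                  + diff @ inv_q @ diff
                  - 2.0
                  + np.log(np.linalg.det(sig_q) / np.linalg.det(sig_p)))

pred = bbox_to_gaussian(0.0, 0.0, 4, 2, np.pi / 6)
gt   = bbox_to_gaussian(0.5, 0.0, 4, 2, np.pi / 4)
print(kld(*pred, *gt))   # small center/angle offsets give a small, smooth loss
```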